13 research outputs found

    Cooperation in Wireless Sensor Networks with Intra and Inter Cluster Interference

    Get PDF
    Virtual MIMO configurations, a common model for cooperation in sensor networks, trade off cooperation cost against MIMO gains. Most proposed approaches rely mainly on the observation that cooperation at the transmitter side alone is considerably more powerful than receiver cooperation alone. The scenario analysed in this contribution includes the interference from other, closely located clusters, which clearly degrades either type of cooperation. Under these circumstances, the use of additional sensors at the receiver side helps to create a set of virtual beamformers, optimally designed to cancel the undesired signal. Transmitter cooperation based on Dirty Paper Coding (DPC) strategies to minimize intra-cluster interference, combined with virtual beamformers to minimize inter-cluster interference, thus proves a very satisfactory combination.
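A minimal sketch of the receiver-side idea: assuming a single dominant interfering cluster with a known channel, a zero-forcing virtual beamformer can be built by projecting the desired channel onto the orthogonal complement of the interference. All vectors below are synthetic placeholders, not the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx = 4                       # cooperating receive sensors in the cluster
h = rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)  # desired channel
g = rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)  # interfering cluster's channel

# Zero-forcing beamformer: project the desired channel onto the
# orthogonal complement of the interference direction, then normalize.
P_perp = np.eye(n_rx) - np.outer(g, g.conj()) / np.vdot(g, g)
w = P_perp @ h
w /= np.linalg.norm(w)

print(abs(np.vdot(w, g)))   # residual inter-cluster interference ~ 0
print(abs(np.vdot(w, h)))   # retained desired-signal gain
```

With more receive sensors than interfering clusters, the same projection generalizes to nulling several interference directions at once.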

    Shuffled Multi-Channel Sparse Signal Recovery

    Full text link
    Mismatches between samples and their respective channel or target commonly arise in several real-world applications. For instance, whole-brain calcium imaging of freely moving organisms, multiple-target tracking or multi-person contactless vital sign monitoring may be severely affected by mismatched sample-channel assignments. To systematically address this fundamental problem, we pose it as a signal reconstruction problem where we have lost correspondences between the samples and their respective channels. Assuming that we have a sensing matrix for the underlying signals, we show that the problem is equivalent to a structured unlabeled sensing problem, and establish sufficient conditions for unique recovery. To the best of our knowledge, a sampling result for the reconstruction of shuffled multi-channel signals has not been considered in the literature, and existing methods for unlabeled sensing cannot be directly applied. We extend our results to the case where the signals admit a sparse representation in an overcomplete dictionary (i.e., the sensing matrix is not precisely known), and derive sufficient conditions for the reconstruction of shuffled sparse signals. We propose a robust reconstruction method that combines sparse signal recovery with robust linear regression for the two-channel case. The performance and robustness of the proposed approach are illustrated in an application related to whole-brain calcium imaging. The proposed methodology can be generalized to sparse signal representations other than the ones considered in this work, so as to be applicable to a variety of real-world problems with imprecise measurement or channel assignment.
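The combinatorial core of unlabeled sensing can be illustrated with a toy exhaustive search over permutations, feasible only for a handful of samples; the paper's method is more sophisticated, and everything below is synthetic.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 2                          # measurements, unknowns
A = rng.standard_normal((m, n))      # known sensing matrix
x_true = np.array([1.5, -0.7])
perm_true = np.array([2, 0, 4, 1, 3])
y = (A @ x_true)[perm_true]          # samples observed in shuffled order

# Exhaustive search: for each candidate ordering of the rows, solve a
# least-squares problem and keep the assignment with smallest residual.
best = (np.inf, None, None)
for p in itertools.permutations(range(m)):
    p = list(p)
    x_cand = np.linalg.lstsq(A[p], y, rcond=None)[0]
    r = np.linalg.norm(A[p] @ x_cand - y)
    if r < best[0]:
        best = (r, x_cand, p)

r_min, x_hat, p_hat = best
print(x_hat)                         # matches x_true when recovery is unique
```

The factorial cost of this search is precisely why structured recovery guarantees and robust methods, as pursued in the paper, are needed.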

    A fitting algorithm for random modeling the PLC channel

    Get PDF
    The characteristics of the power-line communication (PLC) channel are difficult to model due to the heterogeneity of the networks and the lack of common wiring practices. To obtain the full variability of the PLC channel, random channel generators are of great importance for the design and testing of communication algorithms. In this respect, we propose a random channel generator that is based on the top-down approach. Basically, we describe the multipath propagation and the coupling effects with an analytical model. We introduce the variability into a restricted set of parameters and, finally, we fit the model to a set of measured channels. The proposed model enables a closed-form description of both the mean path-loss profile and the statistical correlation function of the channel frequency response. As an example of application, we apply the procedure to a set of in-home measured channels in the band 2-100 MHz whose statistics are available in the literature. The measured channels are divided into nine classes according to their channel capacity. We provide the parameters for the random generation of channels for all nine classes, and we show that the results are consistent with the experimental ones. Finally, we merge the classes to capture the entire heterogeneity of in-home PLC channels. In particular, we introduce the class occurrence probability, and we present a random channel generator that targets the ensemble of all nine classes. The statistics of the composite set of channels are also studied, and they are compared to the results of experimental measurement campaigns in the literature.
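A sketch of a top-down multipath generator in the spirit of the model described: the frequency response is a sum of attenuated, delayed paths. The attenuation constants and path statistics below are illustrative placeholders, not the fitted class parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Top-down multipath form: H(f) = sum_i g_i * exp(-(a0 + a1*f^k)*d_i)
#                                       * exp(-j*2*pi*f*d_i / v)
# All numeric values below are illustrative assumptions.
n_paths = 8
a0, a1, k = 0.0, 7.8e-10, 1.0        # attenuation parameters (assumed)
v = 2.0e8                            # propagation speed in m/s (assumed)
g = rng.uniform(-1, 1, n_paths)      # random path gains
d = np.sort(rng.uniform(50, 300, n_paths))   # path lengths in meters

f = np.linspace(2e6, 100e6, 512)     # 2-100 MHz band, as in the abstract
H = np.zeros_like(f, dtype=complex)
for gi, di in zip(g, d):
    H += gi * np.exp(-(a0 + a1 * f**k) * di) * np.exp(-2j * np.pi * f * di / v)

path_loss_db = 20 * np.log10(np.abs(H) + 1e-12)
print(path_loss_db[:3])              # frequency-selective fading profile
```

Fitting, as the paper does, then amounts to choosing the parameter distributions so that the statistics of many such random draws match a measured channel class.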

    Virtual MIMO RADAR using OFDM-CDM Waveforms

    Get PDF
    This paper addresses a new perspective on the exploitation of diversity, resembling recent seminal proposals with multiple antennas known as MIMO radar. Our focus pursues advantages similar to those of spatial MIMO systems, but aims to achieve the desired resistance to fading and/or SNR increase without relying on multiple antennas. We design an OFDM-CDM waveform, inspired by modern communication systems, that creates a virtual MIMO system operating on an artificial 2D domain formed by a set of well-separated carriers (OFDM) and several OFDM symbols, each modulated by orthogonal codes (CDM). We consider the most general scenario, with moving targets and large targets giving rise to an equivalent time-variant and frequency-selective channel model. Our proposal proceeds in two steps: the first re-orthogonalizes the transmitted set of OFDM signals by proper time and frequency synchronization (this stage provides range and velocity estimators), and the second is based on Neyman-Pearson detection improved by the diversity gain.
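The CDM spreading across OFDM symbols can be sketched with Walsh-Hadamard codes: orthogonality of the code matrix lets the receiver separate the spread streams exactly. Sizes and data below are illustrative, not the paper's waveform parameters.

```python
import numpy as np

# OFDM-CDM grid sketch: N_c well-separated carriers (OFDM) times N_s
# OFDM symbols, with each data stream spread by an orthogonal Walsh code.
N_c, N_s = 8, 4

# Sylvester construction of a 4x4 Hadamard (Walsh) code matrix:
# rows are mutually orthogonal spreading codes.
H2 = np.array([[1, 1], [1, -1]])
W = np.kron(H2, H2)

rng = np.random.default_rng(3)
data = rng.choice([-1, 1], size=(N_c, N_s))   # BPSK symbols per carrier/stream

# Spread: combine each carrier's N_s data symbols through the code
# matrix, producing the transmitted carrier-by-symbol grid.
grid = data @ W                      # shape (N_c, N_s)

# Despread at the receiver: W @ W.T = N_s * I, so orthogonality
# recovers the data exactly (after re-orthogonalization, in the paper).
recovered = (grid @ W.T) / N_s
print(np.array_equal(recovered, data))
```

The paper's first synchronization step exists precisely to restore this orthogonality, which target motion and delay spread would otherwise destroy.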

    Improved Animal Tracking Algorithm using Distributed Kalman Filter-based Algorithms

    Full text link
    Animal tracking has been addressed by different initiatives over the last two decades. Most of them rely on satellite connectivity at every single node and lack energy-saving strategies. This paper presents several new contributions to the tracking of dynamic heterogeneous asynchronous networks (primary nodes with GPS and secondary nodes with a kinetic generator), motivated by the animal-tracking paradigm with random transmissions. A simple approach based on connectivity and coverage intersection is compared with more sophisticated algorithms based on ad-hoc implementations of distributed Kalman-based filters that integrate measurement information using consensus principles to provide enhanced accuracy. Several simulations are included, varying the coverage range, the random behavior of the kinetic generator (modeled as a Poisson process) and the periodic activation of GPS. In addition, this study is complemented with hardware developments and implementations on commercial off-the-shelf equipment, which show the feasibility of these proposals on real hardware.
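The consensus building block that such distributed Kalman-based filters rely on can be sketched as plain average consensus: each node repeatedly mixes its estimate with its neighbors' until all agree on the network-wide average. The ring topology and step size below are illustrative choices.

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                  # ring: each node talks to 2 neighbors
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1

deg = A.sum(axis=1)
eps = 0.4 / deg.max()               # step size below 1/max degree
L = np.diag(deg) - A                # graph Laplacian
W = np.eye(n) - eps * L             # symmetric, doubly stochastic mixing matrix

rng = np.random.default_rng(4)
x = rng.standard_normal(n)          # local measurements (e.g., positions)
target = x.mean()
for _ in range(200):
    x = W @ x                       # one consensus iteration per communication round

print(np.max(np.abs(x - target)))   # near zero after enough iterations
```

Because each iteration needs only neighbor-to-neighbor exchanges, the same mechanism fits the sparse, asynchronous connectivity of an animal-tracking network.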

    Distributed collaborative processing in wireless sensor networks with application to target localization and beamforming

    Full text link
    Abstract: The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum-likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. 
    While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their signals add constructively at the receiver. One of the inconveniences of collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality-of-service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer. While in the first part we consider only battery depletion due to communications beamforming, we extend the model to account for more realistic scenarios through the introduction of an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from an energy-efficiency perspective, the network's lifetime is significantly improved. 
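A minimal sketch of suboptimal RSSI-based localization: invert a log-distance path-loss model to obtain range estimates, then solve the classic linearized trilateration least-squares problem. The path-loss constants are assumptions, noise is omitted for clarity, and this is a stand-in for, not a reproduction of, the thesis's formulation.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])

P0, eta = -40.0, 2.0                 # dBm at 1 m, path-loss exponent (assumed)
d_true = np.linalg.norm(anchors - target, axis=1)
rssi = P0 - 10 * eta * np.log10(d_true)          # log-distance model, no noise

d_hat = 10 ** ((P0 - rssi) / (10 * eta))         # invert the model to get ranges

# Linearize: subtracting the first anchor's range equation,
# ||p - a_i||^2 - ||p - a_1||^2 = d_i^2 - d_1^2, yields a linear system in p.
a1, d1 = anchors[0], d_hat[0]
A = 2 * (anchors[1:] - a1)
b = (d1**2 - d_hat[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(a1**2))
p_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(p_hat)                          # recovers the target position without noise
```

The appeal of this suboptimal route is that the resulting least-squares problem, unlike the raw maximum-likelihood one, lends itself to distributed solvers such as consensus.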
    Resumen (translated from Spanish): The proliferation of wireless sensor networks, together with the wide variety of possible related applications, has motivated the development of the tools and algorithms needed for cooperative processing in distributed systems. One of the applications that has attracted the most interest in the scientific community is localization, where the nodes of the network try to estimate the position of a target located within their coverage area. The localization problem is especially challenging when received signal strength (RSSI) measurements are used, the main drawback being that the received signal level does not follow a linear relationship with the target's position. Many current RSSI-based localization solutions rely on complex centralized schemes such as particle filters, while others rely on much simpler schemes with lower accuracy. Moreover, in many cases the strategies are centralized, which makes them impractical for implementation in sensor networks. From a practical and implementation standpoint it is convenient, for certain scenarios and applications, to develop alternatives that offer a trade-off between complexity and accuracy. Along these lines, instead of directly addressing the estimation of the target's position under the maximum-likelihood criterion, we propose a suboptimal formulation of the problem that is analytically more tractable and offers the advantage of allowing the localization problem to be solved in a fully distributed way, making it an attractive solution in the context of wireless sensor networks. To this end, distributed processing tools such as consensus algorithms and distributed convex optimization are used. 
    For applications requiring a higher degree of accuracy, a strategy is proposed that consists of locally optimizing the likelihood function around the initially obtained estimate. This optimization can be carried out in a decentralized way using a consensus-based version of the Gauss-Newton method, provided the measurement noises at the different nodes are assumed independent. Regardless of the underlying application of the sensor network, a mechanism is needed to gather the data generated by the network. One way to do this is through one or more special nodes, called sink nodes, that act as information-collection centers and are equipped with additional hardware enabling interaction with the outside of the network. The main disadvantage of this strategy is that such nodes become bottlenecks in terms of traffic and computational load. As an alternative, cooperative beamforming techniques can be used so that the network as a whole can be viewed as a single virtual multi-antenna system and can thus exploit the benefits offered by multi-antenna communications. To this end, the nodes of the network synchronize their transmissions so that constructive interference is produced at the receiver. However, current techniques rely on average and asymptotic results, valid when the number of nodes is very large. For a specific configuration, control over the radiation pattern is lost, potentially causing interference to coexisting systems or spending more power than required. Energy efficiency is a central issue in wireless sensor networks, since the nodes are battery-powered. 
    It is therefore very important to preserve the battery, avoiding unnecessary replacements and the resulting increase in costs. Under these considerations, a beamforming scheme is proposed that maximizes the useful lifetime of the network, understood as the maximum time the network can remain operational while guaranteeing the quality-of-service (QoS) requirements that allow reliable decoding of the received signal at the base station. Distributed algorithms that converge to the centralized solution are also proposed. Initially, communication with the base station is considered the only cause of energy consumption. This energy-consumption model is then modified to account for other forms of energy expenditure arising from processes inherent to the operation of the network, such as data acquisition and processing, local communications between nodes, etc. This additional energy consumption is modeled as a random variable at each node. The formulation thus moves to a probabilistic scenario that generalizes the deterministic case, and conditions are provided under which the problem can be solved efficiently. It is shown that the network lifetime improves significantly under the proposed energy-efficiency criterion.
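The coherent-combining gain that motivates collaborative beamforming can be sketched in a few lines: when n nodes align their carrier phases to the channel, their signals add coherently at the base station, giving an n-squared received-power gain over a single node. Unit transmit amplitudes and a flat channel are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 16
phase = rng.uniform(0, 2 * np.pi, n)     # channel phase seen by each node

# Phase-synchronized nodes pre-compensate the channel phase; without
# synchronization the contributions combine with random phases.
aligned = np.sum(np.exp(1j * phase) * np.exp(-1j * phase))
unsynced = np.sum(np.exp(1j * phase))

print(abs(aligned) ** 2)     # coherent combining: n^2 received power
print(abs(unsynced) ** 2)    # typically only O(n) on average
```

This gain is also why beamforming weights can be traded against battery state: a node with a depleted battery can contribute less power while the QoS target is still met.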

    Reconstruction of Multivariate Sparse Signals from Mismatched Samples

    No full text
    Erroneous correspondences between samples and their respective channel or target commonly arise in several real-world applications. For instance, whole-brain calcium imaging of freely moving organisms, multiple-target tracking or multi-person contactless vital sign monitoring may be severely affected by mismatched sample-channel assignments. To systematically address this fundamental problem, we pose it as a signal reconstruction problem where we have lost correspondences between the samples and their respective channels. We show that under the assumption that the signals of interest admit a sparse representation over an overcomplete dictionary, unique signal recovery is possible. Our derivations reveal that the problem is equivalent to a structured unlabeled sensing problem without precise knowledge of the sensing matrix. Unfortunately, existing methods are neither robust to errors in the regressors nor do they exploit the structure of the problem. Therefore, we propose a novel robust two-step approach for the reconstruction of shuffled sparse signals. The performance and robustness of the proposed approach are illustrated in an application of whole-brain calcium imaging in computational neuroscience. The proposed framework can be generalized to sparse signal representations other than the ones considered in this work, so as to be applicable to a variety of real-world problems with imprecise measurement or channel assignment.
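The role of robust regression here can be illustrated on synthetic data: a small fraction of mismatched samples behave like outliers, so a robust estimator can still recover the coefficients where ordinary least squares degrades. The RANSAC-style loop below is an illustrative stand-in, not the paper's exact two-step method.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 60, 3
A = rng.standard_normal((m, n))
x_true = np.array([2.0, -1.0, 0.5])
y = A @ x_true
y[[3, 17, 42]] = y[[17, 42, 3]]      # a few shuffled (mismatched) samples

# RANSAC-style robust fit: solve from random minimal subsets and keep
# the candidate that explains the most samples, then refit on inliers.
best_count, best_mask = -1, None
for _ in range(200):
    idx = rng.choice(m, size=n, replace=False)
    try:
        cand = np.linalg.solve(A[idx], y[idx])
    except np.linalg.LinAlgError:
        continue
    mask = np.abs(A @ cand - y) < 1e-6
    if mask.sum() > best_count:
        best_count, best_mask = mask.sum(), mask

x_hat = np.linalg.lstsq(A[best_mask], y[best_mask], rcond=None)[0]
print(np.allclose(x_hat, x_true))     # mismatched samples are rejected
```

The rejected rows are exactly the mismatched assignments, which is what allows the subsequent sparse-recovery step to proceed as if the correspondences were clean.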